Abstract Glacial cycles during the Pleistocene had profound impacts on local environments and climatic conditions. In North America, some regions that currently support diverse biomes were entirely covered by ice sheets, while other regions were environmentally unsuitable for the organisms that live there now. Organisms that occupy these regions in the present day must have expanded or dispersed into them since the last glacial maximum, raising the possibility that species with similar geographic distributions show temporally concordant population size changes associated with these warming trends. We examined 17 lineages from 9 eastern North American snake species and species complexes to test for a signal of temporally concordant coexpansion using a machine learning approach. We found that the majority of lineages show population size increases towards the present, with evidence for coexpansion in five out of fourteen lineages, while expansion in the others was idiosyncratic. We also examined relationships between genetic distance and current environmental predictors and showed that genomic responses to environmental predictors are not consistent among species. We therefore conclude that Pleistocene warming resulted in population size increases in most eastern North American snake species, but that variation in environmental preferences and other species-specific traits produces variance in the exact timing of expansion.
-
Our knowledge of the biodiversity of Asia and Australasia continues to expand with more focused studies on the systematics of various groups and their biogeography. Historically, fluctuating sea levels and cyclic connection and separation of now-disjunct landmasses have been invoked to explain the accumulation of biodiversity via species-pump mechanisms. However, recent research has shown that geological shifts of the mainland and species dispersal events may be better explanations of the biodiversity in these regions. We investigate these processes in the poorly studied and geographically widespread Mud Snakes (Serpentes: Homalopsidae) using a target-capture approach yielding ~4,800 nuclear loci from fresh tissues, supplemented with mitochondrial data from formalin-fixed museum specimens. We use these datasets to reconstruct the first resolved phylogeny of the group, identify their biogeographic origins, and test hypotheses regarding the roles of sea-level change and habitat selection in their diversification. Divergence dating and ancestral range estimation yielded support for an Oligocene origin and diversification from mainland Southeast Asia and Sundaland in the rear-fanged group ~20 million years ago, followed by eastward and westward dispersal. GeoHiSSE models indicate that niche expansion of ancestral, rear-fanged lineages into aquatic environments did not impact their diversification rates. Our results highlight that Pleistocene sea-level changes and habitat specificity did not primarily lead to the extant species richness of Homalopsidae and that, alternatively, geological shifts in mainland Southeast Asia may have been a major driver of diversity in this group. We also emphasize the importance of using fresh and degraded tissues, and both nuclear and mitochondrial DNA, to fill knowledge gaps in poorly known but highly diverse and conceptually important groups. Here, Homalopsidae represents a non-traditional but effective model study system for understanding transitions between terrestrial, marine, and freshwater environments.
-
Pupko, Tal (Ed.)
Abstract Nearly all current Bayesian phylogenetic applications rely on Markov chain Monte Carlo (MCMC) methods to approximate the posterior distribution for trees and other model parameters. These approximations are reliable only if the Markov chains adequately converge and sample from the joint posterior distribution. Although several studies of phylogenetic MCMC convergence exist, they have focused on simulated data sets or select empirical examples. Therefore, much of what is considered common knowledge about MCMC in empirical systems derives from a relatively small family of analyses under ideal conditions. To address this, we present an overview of commonly applied phylogenetic MCMC diagnostics and an assessment of patterns of these diagnostics across more than 18,000 empirical analyses. Many analyses appeared to perform well, and failures in convergence were most likely to be detected using the average standard deviation of split frequencies, a diagnostic that compares topologies among independent chains. Different diagnostics yielded different information about failed convergence, demonstrating that multiple diagnostics must be employed to reliably detect problems. The number of taxa and the average branch lengths in analyses have clear impacts on MCMC performance, with more taxa and shorter branches leading to more difficult convergence. We show that the use of models that include both Γ-distributed among-site rate variation and a proportion of invariable sites is not broadly problematic for MCMC convergence but is also unnecessary. Changes to heating and the use of model-averaged substitution models can both improve convergence in some cases, but neither is a panacea.
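The topology diagnostic highlighted in this abstract, the average standard deviation of split frequencies (ASDSF), can be sketched compactly. The following is a minimal illustration, not the study's implementation: each sampled tree is represented simply as a set of frozenset bipartitions, and the 10% minimum-frequency cutoff (the default in tools such as MrBayes) is an assumption of this sketch.

```python
import math

def split_frequencies(trees):
    """Frequency of each bipartition (split) in one chain's sample,
    where each tree is given as a set of frozenset splits."""
    counts = {}
    for splits in trees:
        for s in splits:
            counts[s] = counts.get(s, 0) + 1
    n = len(trees)
    return {s: c / n for s, c in counts.items()}

def asdsf(chains, min_freq=0.1):
    """Average standard deviation of split frequencies across chains.
    Splits below min_freq in every chain are ignored, as is conventional.
    Values near zero indicate the chains agree on topology frequencies."""
    freqs = [split_frequencies(chain) for chain in chains]
    all_splits = set().union(*freqs)
    sds = []
    for s in all_splits:
        f = [fr.get(s, 0.0) for fr in freqs]
        if max(f) < min_freq:
            continue  # rare split: too noisy to be informative
        mean = sum(f) / len(f)
        sds.append(math.sqrt(sum((x - mean) ** 2 for x in f) / len(f)))
    return sum(sds) / len(sds) if sds else 0.0

# Toy example: two chains of two sampled trees each.
chain1 = [{frozenset({"A", "B"})}, {frozenset({"A", "B"})}]
chain2 = [{frozenset({"A", "B"})}, {frozenset({"A", "C"})}]
print(asdsf([chain1, chain2]))  # 0.25: the chains disagree noticeably
```

Because the diagnostic compares independent chains rather than examining one chain in isolation, it can flag the topological non-convergence that single-chain trace diagnostics miss.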
-
Abstract The leakage of quantum information out of the two computational states of a qubit into other energy states represents a major challenge for quantum error correction. During the operation of an error-corrected algorithm, leakage builds over time and spreads through multi-qubit interactions. This leads to correlated errors that degrade the exponential suppression of the logical error with scale, thus challenging the feasibility of quantum error correction as a path towards fault-tolerant quantum computation. Here, we demonstrate a distance-3 surface code and distance-21 bit-flip code on a quantum processor for which leakage is removed from all qubits in each cycle. This shortens the lifetime of leakage and curtails its ability to spread and induce correlated errors. We report a tenfold reduction in the steady-state leakage population of the data qubits encoding the logical state and an average leakage population of less than 1 × 10⁻³ throughout the entire device. Our leakage removal process efficiently returns the system to the computational basis. Adding it to a code circuit would prevent leakage from inducing correlated error across cycles. With this demonstration that leakage can be contained, we have resolved a key challenge for practical quantum error correction at scale.
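The intuition behind per-cycle leakage removal can be captured with a toy rate equation. This is a hypothetical two-parameter model, not the processor's actual dynamics: each cycle a computational-state qubit leaks with probability gamma, after which a removal step returns a leaked qubit to the computational basis with probability r.

```python
def leakage_steady_state(gamma, r):
    """Fixed point of the per-cycle update p -> (p + gamma*(1 - p)) * (1 - r):
    leakage injected at rate gamma, then depleted with removal probability r."""
    return gamma * (1 - r) / (1 - (1 - r) * (1 - gamma))

def simulate(gamma, r, cycles=10_000):
    """Iterate the same update from p = 0 to confirm the closed form."""
    p = 0.0
    for _ in range(cycles):
        p = (p + gamma * (1 - p)) * (1 - r)
    return p
```

In this model, without removal (r = 0) the leaked population saturates near 1, while even a modest removal probability pins the steady state close to gamma, which is the qualitative effect the experiment reports: removing leakage every cycle shortens its lifetime and caps its population.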
-
We demonstrate a high dynamic range Josephson parametric amplifier (JPA) in which the active nonlinear element is implemented using an array of rf-SQUIDs. The device is matched to the 50 Ω environment with a Klopfenstein-taper impedance transformer and achieves a bandwidth of 250–300 MHz with input saturation powers up to −95 dBm at 20 dB gain. A 54-qubit Sycamore processor was used to benchmark these devices, providing a calibration for readout power, an estimation of amplifier added noise, and a platform for comparison against standard impedance matched parametric amplifiers with a single dc-SQUID. We find that the high power rf-SQUID array design has no adverse effect on system noise, readout fidelity, or qubit dephasing, and we estimate an upper bound on amplifier added noise at 1.6 times the quantum limit. Finally, amplifiers with this design show no degradation in readout fidelity due to gain compression, which can occur in multi-tone multiplexed readout with traditional JPAs.
-
Abstract Practical quantum computing will require error rates well below those achievable with physical qubits. Quantum error correction [1,2] offers a path to algorithmically relevant error rates by encoding logical qubits within many physical qubits, for which increasing the number of physical qubits enhances protection against physical errors. However, introducing more qubits also increases the number of error sources, so the density of errors must be sufficiently low for logical performance to improve with increasing code size. Here we report the measurement of logical qubit performance scaling across several code sizes, and demonstrate that our system of superconducting qubits has sufficient performance to overcome the additional errors from increasing qubit number. We find that our distance-5 surface code logical qubit modestly outperforms an ensemble of distance-3 logical qubits on average, in terms of both logical error probability over 25 cycles and logical error per cycle ((2.914 ± 0.016)% compared to (3.028 ± 0.023)%). To investigate damaging, low-probability error sources, we run a distance-25 repetition code and observe a 1.7 × 10⁻⁶ logical error per cycle floor set by a single high-energy event (1.6 × 10⁻⁷ excluding this event). We accurately model our experiment, extracting error budgets that highlight the biggest challenges for future systems. These results mark an experimental demonstration in which quantum error correction begins to improve performance with increasing qubit number, illuminating the path to reaching the logical error rates required for computation.
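The abstract quotes both a logical error probability over 25 cycles and a logical error per cycle. One common convention in the surface-code literature relates the two through an exponential fidelity decay, 1 − 2P = (1 − 2ε)ⁿ for n cycles; the sketch below assumes that convention and is an illustration, not the paper's analysis code.

```python
def cumulative_error(eps_cycle, cycles):
    """Logical error probability P after `cycles` rounds, assuming
    independent per-cycle errors and the fidelity convention
    1 - 2*P = (1 - 2*eps_cycle) ** cycles."""
    return (1.0 - (1.0 - 2.0 * eps_cycle) ** cycles) / 2.0

def error_per_cycle(p_total, cycles):
    """Invert cumulative_error: the per-cycle logical error rate
    implied by a total error probability after `cycles` rounds."""
    return (1.0 - (1.0 - 2.0 * p_total) ** (1.0 / cycles)) / 2.0
```

Under this convention, a per-cycle error near the quoted 2.914% compounds to a substantial error probability over 25 cycles, which is why the per-cycle rate is the natural quantity for comparing codes of different distances run for different durations.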